16 research outputs found
Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy
In recent years, endomicroscopy has become increasingly used for diagnostic
purposes and interventional guidance. It can provide intraoperative aids for
real-time tissue characterization and can help to perform visual investigations
aimed, for example, at discovering epithelial cancers. Due to physical constraints
on the acquisition process, endomicroscopy images still have a low number of
informative pixels, which hampers their quality. Post-processing
techniques, such as Super-Resolution (SR), are a potential solution to increase
the quality of these images. SR techniques are often supervised, requiring
aligned pairs of low-resolution (LR) and high-resolution (HR) image patches to
train a model. However, in our domain, the lack of HR images hinders the
collection of such pairs and makes supervised training unsuitable. For this
reason, we propose an unsupervised SR framework based on an adversarial deep
neural network with a physically-inspired cycle consistency, designed to impose
some acquisition properties on the super-resolved images. Our framework can
exploit HR images, regardless of the domain they come from, to
transfer the quality of the HR images to the initial LR images. This property
can be particularly useful in all situations where pairs of LR/HR are not
available during the training. Our quantitative analysis, validated using a
database of 238 endomicroscopy video sequences from 143 patients, shows the
ability of the pipeline to produce convincing super-resolved images. A Mean
Opinion Score (MOS) study also confirms this quantitative image quality
assessment. Comment: Accepted for publication in the Medical Image Analysis journal.
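The physically-inspired cycle consistency described above can be sketched roughly as follows. The average-pooling "acquisition" operator and the loss shape are illustrative assumptions standing in for the paper's actual model of the imaging process, not the authors' implementation:

```python
import numpy as np

def acquisition(hr, factor=2):
    # Stand-in "acquisition" operator: average pooling simulates how a
    # low-resolution sensor would re-sample the super-resolved image.
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cycle_consistency_loss(lr, sr, factor=2):
    # Penalise super-resolved images whose re-acquired LR version
    # deviates from the original LR input.
    return float(np.mean((acquisition(sr, factor) - lr) ** 2))

lr = np.random.default_rng(0).random((32, 32))
sr = np.kron(lr, np.ones((2, 2)))  # an upsampling that re-acquires exactly
assert cycle_consistency_loss(lr, sr) < 1e-12
```

In a full adversarial framework this cycle term would be combined with a discriminator loss that pushes the super-resolved output toward the HR image distribution.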
Red-Eyes Removal through Cluster-Based Boosting on Gray Codes
Since the wide diffusion of digital cameras and mobile devices with embedded cameras and flashguns, red-eye artifacts have de facto become a critical problem. The technique described herein uses three main steps to identify and remove red eyes. First, red-eye candidates are extracted from the input image by an image filtering pipeline. A set of classifiers is then learned on Gray code features extracted in the clustered patch space and employed to distinguish between eye and non-eye patches. Specifically, for each cluster the Gray code of the red-eye candidate is computed, and some discriminative Gray code bits are selected with a boosting approach. The selected Gray code bits are used during classification to discriminate between eye and non-eye patches. Once red eyes are detected, the artifacts are removed through desaturation and brightness reduction. Experimental results on a large dataset of images demonstrate the effectiveness of the proposed pipeline, which outperforms other existing solutions in terms of hit-rate maximization, false-positive reduction, and quality measures.
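As a rough illustration of the Gray-code features, the binary-reflected Gray code of 8-bit intensities and the selection of individual bit planes can be sketched as follows; the bit indices here are placeholders, whereas in the paper the discriminative bits are chosen by boosting:

```python
import numpy as np

def gray_code(values):
    # Binary-reflected Gray code of 8-bit intensities: g = v XOR (v >> 1).
    v = np.asarray(values).astype(np.uint8)
    return v ^ (v >> 1)

def gray_bits(patch, bit_indices):
    # Select individual Gray-code bit planes from a patch; the indices
    # stand in for the bits a boosting round would pick as discriminative.
    codes = gray_code(patch).ravel()
    return np.array([(codes >> b) & 1 for b in bit_indices]).ravel()

# Gray codes of 0..3 are 0, 1, 3, 2 — adjacent values differ in one bit.
assert list(gray_code([0, 1, 2, 3])) == [0, 1, 3, 2]
```

A useful property for patch features is that small intensity changes flip few Gray-code bits, which makes the selected bit planes relatively stable descriptors.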
Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction
Purpose: Probe-based Confocal Laser Endomicroscopy (pCLE) is a recent imaging
modality that allows performing in vivo optical biopsies. The design of pCLE
hardware, and its reliance on an optical fibre bundle, fundamentally limits the
image quality, with a few tens of thousands of fibres, each acting as the
equivalent of a single-pixel detector, assembled into a single fibre bundle.
Video-registration techniques can be used to estimate high-resolution (HR)
images by exploiting the temporal information contained in a sequence of
low-resolution (LR) images. However, the alignment of LR frames, required for
the fusion, is computationally demanding and prone to artefacts. Methods: In
this work, we propose a novel synthetic data generation approach to train
exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced
quality are recovered by the models trained on pairs of estimated HR images
(generated by the video-registration algorithm) and realistic synthetic LR
images. The performance of three different state-of-the-art DNN techniques was
analysed on a Smart Atlas database of 8806 images from 238 pCLE video
sequences. The results were validated through an extensive Image Quality
Assessment (IQA) that takes into account different quality scores, including a
Mean Opinion Score (MOS). Results: Results indicate that the proposed solution
produces an effective improvement in the quality of the obtained reconstructed
image. Conclusion: The proposed training strategy and the associated DNNs allow us
to perform convincing super-resolution of pCLE images.
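A minimal sketch of generating synthetic LR counterparts from estimated HR images, assuming a simple average-pooling-plus-Gaussian-noise degradation model; the paper's actual synthetic-data generation is tailored to pCLE and more realistic than this:

```python
import numpy as np

def synthetic_lr(hr, factor=2, noise_sigma=0.01, rng=None):
    # Build a synthetic LR image from an estimated HR image:
    # average-pool by `factor`, then add Gaussian noise.
    # This degradation model is a generic stand-in, not the paper's.
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = hr.shape
    lr = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

hr = np.random.default_rng(1).random((64, 64))
lr = synthetic_lr(hr)
assert lr.shape == (32, 32)
```

Pairs of (synthetic LR, estimated HR) images produced this way are exactly the kind of aligned training data an exemplar-based DNN needs.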
DeepBrainPrint: A Novel Contrastive Framework for Brain MRI Re-Identification
Recent advances in MRI have led to the creation of large datasets. With the
increase in data volume, it has become difficult to locate previous scans of
the same patient within these datasets (a process known as re-identification).
To address this issue, we propose an AI-powered medical imaging retrieval
framework called DeepBrainPrint, which is designed to retrieve brain MRI scans
of the same patient. Our framework is a semi-self-supervised contrastive deep
learning approach with three main innovations. First, we use a combination of
self-supervised and supervised paradigms to create an effective brain
fingerprint from MRI scans that can be used for real-time image retrieval.
Second, we use a special weighting function to guide the training and improve
model convergence. Third, we introduce new imaging transformations to improve
retrieval robustness in the presence of intensity variations (i.e. different
scan contrasts), and to account for age and disease progression in patients. We
tested DeepBrainPrint on a large dataset of T1-weighted brain MRIs from the
Alzheimer's Disease Neuroimaging Initiative (ADNI) and on a synthetic dataset
designed to evaluate retrieval performance with different image modalities. Our
results show that DeepBrainPrint outperforms previous methods, including simple
similarity metrics and more advanced contrastive deep learning frameworks.
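A generic InfoNCE-style contrastive loss, of the kind such frameworks build on, can be sketched as follows; this is a textbook formulation, not DeepBrainPrint's exact objective or its special weighting function:

```python
import numpy as np

def info_nce(anchor, candidates, temperature=0.1):
    # InfoNCE loss: row 0 of `candidates` is the positive embedding
    # (e.g. a scan of the same patient), remaining rows are negatives.
    def normalise(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = normalise(candidates) @ normalise(anchor)   # cosine similarities
    logits = sims / temperature
    logits -= logits.max()                             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

a = np.array([1.0, 0.0])
cands = np.array([[1.0, 0.0], [0.0, 1.0]])  # positive matches the anchor
assert info_nce(a, cands) < 0.01
```

Minimising this loss pulls embeddings of the same patient together while pushing other patients' embeddings apart, which is what makes nearest-neighbour retrieval on the resulting "fingerprints" effective.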
JBIG for printer pipelines: a compression test
This paper describes a compression test analysis of the JBIG standard algorithm. The aim of this work is to prove the effectiveness of the standard for images acquired through scanners and processed in a printer pipeline. The main issue in printer pipelines is the need for a memory buffer to store scanned images for multiple prints. This work demonstrates that, at very large scales, the buffer size can be fixed based on the medium-compression case, resorting to multiple scans for uncommon random patterns.
Document type: Part of book or chapter of book
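The fixed-buffer question can be illustrated with a small sketch; zlib stands in for a JBIG codec here, since no JBIG implementation ships with the Python standard library, and the buffer size is an arbitrary example:

```python
import zlib

def fits_fixed_buffer(bilevel_rows, buffer_bytes):
    # Check whether a scanned bi-level page, after lossless compression,
    # fits into a fixed printer memory buffer. zlib is a stand-in for a
    # JBIG codec; real bi-level codecs compress such pages even better.
    packed = b"".join(bytes(row) for row in bilevel_rows)
    return len(zlib.compress(packed, 9)) <= buffer_bytes

page = [[0] * 1024 for _ in range(1024)]  # a mostly-blank scanned page
assert fits_fixed_buffer(page, 4096)      # 1 MB raw fits a small buffer
```

Pages dominated by uncommon random patterns would fail this check, which is when a pipeline sized for the medium-compression case would fall back to multiple scans.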
CONTENT-AWARE IMAGE RESIZING WITH SEAM SELECTION BASED ON GRADIENT VECTOR FLOW
Content-aware image resizing is an effective technique that makes it possible to take the visual content of images into account during the resizing process. The basic idea behind these algorithms is to resize an image by considering vertical and/or horizontal paths of pixels (i.e., seams) which contain low-salience information. In this paper we exploit the Gradient Vector Flow (GVF) of the image to establish the paths to be considered during the resizing. The relevance of each path is derived from a saliency map obtained by considering the magnitude of the GVF associated with the image under consideration. The proposed technique has been tested, both qualitatively and quantitatively, on a representative set of images labeled with the corresponding salient objects (i.e., ground-truth maps). Experimental results demonstrate that our method preserves crucial salient regions better than other state-of-the-art algorithms.
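The seam-selection idea can be illustrated with a minimal dynamic-programming sketch; plain gradient magnitude stands in here for the GVF magnitude used in the paper:

```python
import numpy as np

def saliency_map(img):
    # Gradient-magnitude saliency, a simple proxy for the GVF magnitude.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def vertical_seam(energy):
    # Dynamic programming: cheapest top-to-bottom 8-connected pixel path.
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):      # backtrack through the neighbours
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam

# A seam routed around a high-energy (salient) column.
energy = np.zeros((5, 5))
energy[:, 2] = 10.0
assert not np.any(vertical_seam(energy) == 2)
```

Removing one such seam per iteration shrinks the image width while steering the deleted pixels away from salient regions.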
Image Resizing Based on Gradient Vector Flow Analysis
Content-aware image resizing is an effective technique that makes it possible to take the visual content of images into account during the resizing process. The basic idea behind these algorithms is to resize an image by considering vertical and/or horizontal paths of pixels (i.e., seams) which contain low-salience information. In this paper we exploit the Gradient Vector Flow (GVF) of the image to establish the paths to be considered during the resizing. The relevance of each path is derived from a saliency map obtained by considering the magnitude of the GVF associated with the image under consideration. The proposed technique has been tested, both qualitatively and quantitatively, on a representative set of images labeled with the corresponding salient objects (i.e., ground-truth maps). Experimental results demonstrate that our method preserves crucial salient regions better than other state-of-the-art algorithms.